273 research outputs found
Micro-CT Imaging of RGD-Conjugated Gold Nanorods Targeting Tumor In Vivo
Gold nanomaterials used as computed tomography (CT) contrast agents can achieve higher contrast at a lower X-ray dosage, and offer longer imaging times and fewer toxic side effects than current contrast agents. As a receptor for the Cyclo(Arg-Gly-Asp-D-Phe-Lys) (RGD) peptide, integrin αvβ3 is overexpressed on some tumor cells and on tumor neovasculature. In this paper, we conjugated the RGD peptide to the surface of gold nanorods (AuNRs), designated RGD-AuNRs, a promising candidate for tumor-targeted micro-CT imaging. Integrin αvβ3-positive U87 cells and integrin αvβ3-negative HT-29 cells were chosen to establish corresponding animal models, and the tumor-targeting ability and imaging capability of RGD-AuNRs were then tested in vitro and in vivo. The MTT assay and stability measurements showed that RGD conjugation eliminated the nanorods' cytotoxicity and improved their biocompatibility and stability. Dark-field imaging of U87 and HT-29 cells verified the binding affinity and uptake of RGD-AuNRs, and the results showed that RGD-AuNRs were more specific to U87 cells. The enhanced micro-CT contrast after intramuscular and subcutaneous injection demonstrated the feasibility of RGD-AuNRs as contrast agents. Furthermore, micro-CT imaging of the targeted U87 and HT-29 tumor models verified the targeting ability of RGD-AuNRs.
Clinical skin lesion diagnosis using representations inspired by dermatologist criteria
The skin is the largest organ of the human body. Around 30%-70% of individuals worldwide have skin-related health problems, for whom effective and efficient diagnosis is necessary. Recently, computer-aided diagnosis (CAD) systems have been successfully applied to the recognition of skin cancers in dermatoscopic images. However, little work has concentrated on the commonly encountered skin diseases in clinical images captured by easily accessible cameras or mobile phones. Meanwhile, for a CAD system, the representations of skin lesions must be understandable to dermatologists so that the predictions are convincing. To address this problem, we present effective representations inspired by the accepted dermatological criteria for diagnosing clinical skin lesions. We demonstrate that the dermatological criteria are highly correlated with measurable visual components. Accordingly, we design six medical representations based on different criteria for the recognition of skin lesions, and construct a diagnosis system for clinical skin disease images. Experimental results show that the proposed medical representations not only capture the manifestations of skin lesions effectively and consistently with the dermatological criteria, but also improve prediction performance relative to state-of-the-art methods based on uninterpretable features.
Alice Benchmarks: Connecting Real World Object Re-Identification with the Synthetic
For object re-identification (re-ID), learning from synthetic data has become
a promising strategy to cheaply acquire large-scale annotated datasets and
effective models, with few privacy concerns. Many interesting research problems
arise from this strategy, e.g., how to reduce the domain gap between synthetic
source and real-world target. To facilitate the development of new approaches to
learning from synthetic data, we introduce the Alice benchmarks, large-scale
datasets providing benchmarks as well as evaluation protocols to the research
community. Within the Alice benchmarks, two object re-ID tasks are offered:
person and vehicle re-ID. We collected and annotated two challenging real-world
target datasets: AlicePerson and AliceVehicle, captured under various
illuminations, image resolutions, etc. An important feature of our real-world
targets is that the clusterability of their training sets is not manually
guaranteed, which brings them closer to a real domain adaptation test
scenario. Correspondingly, we
reuse existing PersonX and VehicleX as synthetic source domains. The primary
goal is to train models from synthetic data that can work effectively in the
real world. In this paper, we detail the settings of Alice benchmarks, provide
an analysis of existing commonly-used domain adaptation methods, and discuss
some interesting future directions. An online server will be set up for the
community to evaluate methods conveniently and fairly.
Comment: 9 pages, 4 figures, 4 tables
Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception?
Hand-crafted image quality metrics, such as PSNR and SSIM, are commonly used
to evaluate model privacy risk under reconstruction attacks. Under these
metrics, reconstructed images that are determined to resemble the original one
generally indicate more privacy leakage. Images determined as overall
dissimilar, on the other hand, indicate higher robustness against attack.
However, there is no guarantee that these metrics reflect human opinions well,
which, as a judgement of model privacy leakage, are more trustworthy. In this
paper, we comprehensively study the faithfulness of these hand-crafted metrics
to human perception of privacy information from the reconstructed images. On 5
datasets ranging from natural images and faces to fine-grained classes, we use 4
existing attack methods to reconstruct images from many different
classification models and, for each reconstructed image, we ask multiple human
annotators to assess whether this image is recognizable. Our studies reveal
that the hand-crafted metrics only have a weak correlation with the human
evaluation of privacy leakage and that even these metrics themselves often
contradict each other. These observations suggest that current metrics can be
misleading measures of privacy leakage. To address this risk, we propose a learning-based
measure called SemSim to evaluate the Semantic Similarity between the original
and reconstructed images. SemSim is trained with a standard triplet loss, using
an original image as an anchor, one of its recognizable reconstructed images as
a positive sample, and an unrecognizable one as a negative. By training on
human annotations, SemSim better reflects privacy leakage at the semantic
level. We show that SemSim has a significantly higher correlation
with human judgment compared with existing metrics. Moreover, this strong
correlation generalizes to unseen datasets, models, and attack methods.
Comment: 15 pages, 9 figures, and 3 tables
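The triplet setup described in the abstract can be sketched in a few lines. This is a minimal sketch, assuming toy 2-D embeddings and Euclidean distance; it is not the authors' actual feature extractor or training pipeline.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the recognizable reconstruction
    (positive) toward the original image (anchor) and push the
    unrecognizable one (negative) at least `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings standing in for learned image features.
original = np.array([1.0, 0.0])
recognizable = np.array([0.9, 0.1])     # close to the original
unrecognizable = np.array([-1.0, 0.5])  # far from the original

loss = triplet_loss(original, recognizable, unrecognizable)
```

When the positive already sits closer to the anchor than the negative by more than the margin, the loss is zero and the triplet contributes nothing to training, which is the intended behavior for well-separated examples.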
Exploring Multi-Programming-Language Commits and Their Impacts on Software Quality: An Empirical Study on Apache Projects
Context: Modern software systems (e.g., Apache Spark) are usually written in
multiple programming languages (PLs). There is little understanding of the
phenomenon of multi-programming-language commits (MPLCs), which involve
modified source files written in multiple PLs. Objective: This work aims to
explore MPLCs and their impacts on development difficulty and software quality.
Methods: We performed an empirical study on eighteen non-trivial Apache
projects with 197,566 commits. Results: (1) the most commonly used PL
combination consists of all four PLs, i.e., C/C++, Java, JavaScript, and
Python; (2) 9% of the commits from all the projects are MPLCs, and the
proportion of MPLCs in 83% of the projects settles at a relatively stable level;
(3) more than 90% of the MPLCs from all the projects involve source files in
two PLs; (4) the change complexity of MPLCs is significantly higher than that
of non-MPLCs; (5) issues fixed in MPLCs take significantly longer to be
resolved than issues fixed in non-MPLCs in 89% of the projects; (6) MPLCs do
not show significant effects on issue reopen; (7) source files undergoing MPLCs
tend to be more bug-prone; and (8) MPLCs introduce more bugs than non-MPLCs.
Conclusions: MPLCs are related to increased development difficulty and
decreased software quality.
Comment: Preprint accepted for publication in Journal of Systems and Software,
2022. arXiv admin note: substantial text overlap with arXiv:2103.1169
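The MPLC notion above can be made concrete with a small sketch. The extension-to-language map and the helper name below are hypothetical, chosen only to mirror the four PLs studied in the paper; the study's actual file-classification rules are not given in the abstract.

```python
import os

# Hypothetical mapping from file extension to programming language,
# restricted to the four PLs named in the abstract.
EXT_TO_PL = {
    ".c": "C/C++", ".cc": "C/C++", ".cpp": "C/C++", ".h": "C/C++",
    ".java": "Java", ".js": "JavaScript", ".py": "Python",
}

def is_mplc(modified_files):
    """A commit is a multi-programming-language commit (MPLC) when its
    modified source files span more than one programming language."""
    languages = {
        EXT_TO_PL[os.path.splitext(path)[1]]
        for path in modified_files
        if os.path.splitext(path)[1] in EXT_TO_PL
    }
    return len(languages) > 1
```

Non-source files (documentation, configuration) fall outside the map and therefore do not affect the classification.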
Technical Debt Management in OSS Projects: An Empirical Study on GitHub
Technical debt (TD) refers to delayed tasks and immature artifacts that may
bring short-term benefits but incur extra costs of change during maintenance
and evolution in the long term. TD has been extensively studied in the past
decade, and numerous open source software (OSS) projects were used to explore
specific aspects of TD and validate various approaches for TD management (TDM).
However, a comprehensive understanding of the practice of TDM in OSS
development is still lacking, covering the OSS community's perception of the
TD concept and how TD is managed in OSS development. To this end, we conducted
an empirical study across the whole of GitHub to explore the adoption and
execution of
TDM based on issues in OSS projects. We collected 35,278 issues labeled as TD
(TD issues) distributed over 3,598 repositories in total from the issue
tracking system of GitHub between 2009 and 2020. The findings are that: (1) the
OSS community is embracing the TD concept; (2) the analysis of TD instances
shows that TD may affect both internal and external quality of software
systems; (3) only one TD issue was identified in 31.1% of the repositories and
all TD issues were identified by only one developer in 69.0% of the
repositories; (4) TDM was ignored in 27.3% of the repositories after TD issues
were identified; and (5) among the repositories with TD labels, 32.9% have
abandoned TDM while only 8.2% adopt TDM as a consistent practice. These
findings provide valuable insights for practitioners in TDM and promising
research directions for further investigation.
Comment: 15 pages, 8 images, 10 tables. Manuscript submitted to a journal
(2022)
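The selection of "TD issues" by label can be sketched as follows. The label set here is a hypothetical example; the study's exact label-matching rules are not given in the abstract.

```python
# Hypothetical set of labels taken to indicate technical debt (TD).
TD_LABELS = {"technical debt", "tech debt", "debt"}

def is_td_issue(issue_labels):
    """Flag an issue as a TD issue if any of its labels, after
    whitespace and case normalization, matches the TD label set."""
    return any(label.strip().lower() in TD_LABELS for label in issue_labels)
```

Normalizing the labels before matching makes the filter robust to the inconsistent capitalization common in repository label conventions.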